
    Examining the Effect of Religion on Economic Growth: A Partial Replication and Extension

    Economic growth is the fundamental measurement that assesses a country’s productive capacity in terms of goods and services. It is conventionally estimated as the percent rate of increase in GDP per capita and is correlated with numerous factors in society, including quality of life. For example, one application of GDP per capita is as a primary indicator of standard of living. However, although GDP per capita is a reliable determinant of the level of development in a country, it is certainly not the only way to measure well-being. For instance, it fails to capture many important aspects of human welfare, including health, education, and culture. Religion is a prominent dimension of culture that can be a significant factor in one’s quality of life, yet it is often overlooked as a potential determinant of economic growth. Economists have been trying to fill this gap. In the early 1970s, Simon Kuznets, winner of the Nobel Prize in Economics in 1971, wrote an article highlighting his findings and reflections on modern economic growth. Of the six characteristics of modern economic growth that he recognized, secularization was cited as a means of changing ideology in society over time and thus as an indirect cause of economic growth (Kuznets 1973). In this context, secularization is the separation of a society from religious or spiritual values or influence; one result is the restriction of the role of religion in modern societies. This project explores the relationship between monthly attendance at church services and economic growth across several countries. The goal of my research project is to partially replicate the findings of two leading authors in this field, Robert J. Barro and Rachel M. McCleary (2003). I will also extend their work, which examined data from 1981 to 1999, to cover the period from 1999 to 2012.
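As a minimal illustration of the growth measure the abstract describes (not part of the study itself, and using made-up figures), the percent rate of increase in GDP per capita between two periods can be computed as:

```python
def growth_rate(gdp_pc_prev: float, gdp_pc_curr: float) -> float:
    """Percent rate of increase in GDP per capita between two periods."""
    return 100.0 * (gdp_pc_curr - gdp_pc_prev) / gdp_pc_prev

# Hypothetical figures: GDP per capita rises from $20,000 to $20,600.
print(growth_rate(20000, 20600))  # -> 3.0 (a 3% growth rate)
```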

    Memory Performance in Children with Temporal Lobe Epilepsy: Neocortical vs. Dual Pathologies

    This study investigated memory in children with temporal lobe epilepsy and the ability to discern hippocampal dysfunction with conventional memory tests that are typically used to detect more global memory impairment. All data were obtained retrospectively from the epilepsy surgery program at a local children’s hospital. The research population consisted of 54 children with intractable epilepsy of temporal onset, balanced across pathology types (with and without hippocampal disease) and other demographics. Each was given a clinical battery prior to surgical intervention, which included the WRAML/WRAML2 Verbal Learning subtest, from which the dependent variables for this study were extracted. The research hypothesis predicted that memory retention between verbal learning and recall would be worse for participants with pathology that included hippocampal sclerosis than for those with non-hippocampal temporal lobe pathology. A two-way mixed-design ANOVA was used to test the hypothesis, which allowed incorporation of variables of interest related to memory factors, pathology type, and hemispheric laterality, as well as their various interactions. There was a significant main effect for change in the number of words retained from the final learning trial to the delayed recall. Although the interaction between memory retention and pathology type was not statistically significant, the average of the memory scores did show a significant pathology-by-side effect. Thus, results did not support the hypothesized relationship between retention and hippocampal function. However, additional exploratory analyses revealed that the final learning trial by itself was associated with hippocampal pathology, which applied only to those participants with left-hemisphere lesions. Logistic regression with the final learning trial correctly classified 74 percent of participants into the appropriate pathology category, with 81 percent sensitivity to hippocampal dysfunction.
Mean participant memory scores were nearly one standard deviation below the normative mean for both delayed recall and total learning scaled scores, regardless of pathology type or lesion hemisphericity. Thus, while the conventionally used indices of the WRAML Verbal Learning test are useful for determining overall memory status, they are not specific to pathological substrate. The within-subject main effect showed an expected loss of information across the time of the delay, but overall the recall score showed no association with hippocampal functioning. This study revealed the possibility of measuring hippocampal function at statistically significant group levels using learning scores from a widely used measure of verbal memory, even in participants with intact contralateral mesial temporal structures. It also indicated that hippocampal structures do not play a role during recall measures given after a standard time delay. Data further demonstrated a role of the hippocampus for encoding and transferring information beyond short-term/working memory into long-term memory. During the learning process, the hippocampus appears to work in concert with short-term memory systems, but does not take over the encoding process until enough repetitions have occurred to saturate the working memory buffer. This research represents a small, yet important step forward in our understanding of the hippocampus, with potentially important implications for the future study of memory constructs and mensuration.
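The classification figures quoted above (74 percent correct, 81 percent sensitivity) are standard metrics for a binary classifier such as the logistic regression used here. A minimal sketch of how they are computed, using hypothetical labels rather than the study's data:

```python
def classification_metrics(y_true, y_pred, positive=1):
    """Overall accuracy and sensitivity (true-positive rate) for a binary
    classifier, e.g. one predicting hippocampal vs non-hippocampal pathology."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    accuracy = correct / len(y_true)
    positives = [(t, p) for t, p in zip(y_true, y_pred) if t == positive]
    sensitivity = sum(t == p for t, p in positives) / len(positives)
    return accuracy, sensitivity

# Hypothetical labels: 1 = hippocampal pathology, 0 = other temporal pathology.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 0]
acc, sens = classification_metrics(y_true, y_pred)
print(acc, sens)  # -> 0.75 0.75
```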

    Dynamic and Multi-functional Labeling Schemes

    We investigate labeling schemes supporting adjacency, ancestry, sibling, and connectivity queries in forests. Over the course of more than 20 years, the existence of log n + O(log log n) labeling schemes supporting each of these functions was proven, most recently for ancestry [Fraigniaud and Korman, STOC '10]. Several multi-functional labeling schemes also enjoy lower or upper bounds of log n + Ω(log log n) or log n + O(log log n), respectively; notably, an upper bound of log n + 5 log log n for adjacency+siblings and a lower bound of log n + log log n for each of the functions siblings, ancestry, and connectivity [Alstrup et al., SODA '03]. We improve the constants hidden in the O-notation. In particular, we show a log n + 2 log log n lower bound for connectivity+ancestry and connectivity+siblings, as well as an upper bound of log n + 3 log log n + O(log log log n) for connectivity+adjacency+siblings by altering existing methods. In the context of dynamic labeling schemes, it is known that ancestry requires Ω(n) bits [Cohen et al., PODS '02]. In contrast, we show upper and lower bounds on the label size for adjacency, siblings, and connectivity of 2 log n bits, and 3 log n bits to support all three functions. There exist efficient adjacency labeling schemes for planar, bounded-treewidth, bounded-arboricity, and interval graphs. In a dynamic setting, we show a lower bound of Ω(n) for each of those families.
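To get a feel for the magnitudes involved, the bounds above all have the shape log n + c log log n bits for various constants c. A small sketch (purely illustrative, not the paper's construction) that evaluates this shape for a given forest size:

```python
import math

def label_bits(n: int, c: float) -> int:
    """Bits in a label of size log2(n) + c*log2(log2(n)), rounded up.
    This is the common shape of the label-size bounds discussed; the
    constant c varies by scheme (e.g. c = 2 for the connectivity+ancestry
    lower bound, c = 5 for the adjacency+siblings upper bound)."""
    return math.ceil(math.log2(n) + c * math.log2(math.log2(n)))

# For a forest on n = 2**20 nodes: log2(n) = 20, log2(log2(n)) ~ 4.32,
# so the c = 2 bound shape evaluates to ceil(28.64) = 29 bits.
print(label_bits(2**20, 2))  # -> 29
```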

    An Adaptation To Life In Acid Through A Novel Mevalonate Pathway.

    Extreme acidophiles are capable of growth at pH values near zero. Sustaining life in acidic environments requires extensive adaptations of membranes, proton pumps, and DNA repair mechanisms. Here we describe an adaptation of a core biochemical pathway, the mevalonate pathway, in extreme acidophiles. Two previously known mevalonate pathways involve ATP-dependent decarboxylation of either mevalonate 5-phosphate or mevalonate 5-pyrophosphate, in which a single enzyme carries out two essential steps: (1) phosphorylation of the mevalonate moiety at the 3-OH position and (2) subsequent decarboxylation. We now demonstrate that in extreme acidophiles, decarboxylation is carried out in two separate steps: previously identified enzymes generate mevalonate 3,5-bisphosphate, and a new decarboxylase we describe here, mevalonate 3,5-bisphosphate decarboxylase, produces isopentenyl phosphate. Why use two enzymes in acidophiles when one enzyme provides both functionalities in all other organisms examined to date? We find that at low pH, the dual-function enzyme, mevalonate 5-phosphate decarboxylase, is unable to carry out the first phosphorylation step, yet retains its ability to perform decarboxylation. We therefore propose that extreme acidophiles had to replace the dual-purpose enzyme with two specialized enzymes to efficiently produce isoprenoids in extremely acidic environments.

    Colorful Strips

    Given a planar point set and an integer k, we wish to color the points with k colors so that any axis-aligned strip containing enough points contains all colors. The goal is to bound the necessary size of such a strip, as a function of k. We show that if the strip size is at least 2k-1, such a coloring can always be found. We prove that the size of the strip is also bounded in any fixed number of dimensions. In contrast to the planar case, we show that deciding whether a 3D point set can be 2-colored so that any strip containing at least three points contains both colors is NP-complete. We also consider the problem of coloring a given set of axis-aligned strips, so that any sufficiently covered point in the plane is covered by k colors. We show that in d dimensions the required coverage is at most d(k-1)+1. Lower bounds are given for the two problems. This complements recent impossibility results on decomposition of strip coverings with arbitrary orientations. Finally, we study a variant where strips are replaced by wedges.
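A sketch of the one-directional intuition behind such colorings (this illustrates only a single strip direction, not the paper's full planar construction): coloring points cyclically by their rank in x-order guarantees that any vertical strip with at least k points contains k consecutive points in x-order, and hence all k colors.

```python
def cyclic_coloring(points, k):
    """Color points by their rank in x-order, cycling through k colors.
    Any vertical strip (an x-interval) containing at least k points then
    contains k consecutive ranks, hence all k colors."""
    order = sorted(range(len(points)), key=lambda i: points[i][0])
    colors = [0] * len(points)
    for rank, i in enumerate(order):
        colors[i] = rank % k
    return colors

pts = [(0.1, 5), (0.4, 1), (0.2, 3), (0.9, 2), (0.6, 7), (0.5, 0)]
print(cyclic_coloring(pts, 3))  # -> [0, 2, 1, 2, 1, 0]
```

Handling both axis directions simultaneously is what drives the bound up to 2k-1 in the plane.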

    How Long It Takes for an Ordinary Node with an Ordinary ID to Output?

    In the context of distributed synchronous computing, processors perform in rounds, and the time-complexity of a distributed algorithm is classically defined as the number of rounds before all computing nodes have output. Hence, this complexity measure captures the running time of the slowest node(s). In this paper, we are interested in the running time of ordinary nodes, to be compared with the running time of the slowest nodes. The node-averaged time-complexity of a distributed algorithm on a given instance is defined as the average, taken over every node of the instance, of the number of rounds before that node outputs. We compare the node-averaged time-complexity with the classical one in the standard LOCAL model for distributed network computing. We show that there can be an exponential gap between the node-averaged time-complexity and the classical time-complexity, as witnessed by, e.g., leader election. Our first main result is a positive one, stating that, in fact, the two time-complexities behave the same for a large class of problems on very sparse graphs. In particular, we show that, for LCL problems on cycles, the node-averaged time-complexity is of the same order of magnitude as the slowest-node time-complexity. In addition, in the LOCAL model, the time-complexity is computed as a worst case over all possible identity assignments to the nodes of the network. In this paper, we also investigate the ID-averaged time-complexity, where the number of rounds is averaged over all possible identity assignments. Our second main result is that the ID-averaged time-complexity is essentially the same as the expected time-complexity of randomized algorithms (where the expectation is taken over all possible random bits used by the nodes, and the number of rounds is measured for the worst-case identity assignment). Finally, we study the node-averaged ID-averaged time-complexity.
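The two measures compared above can be stated in a few lines. A minimal sketch (with a made-up per-node round profile, merely reminiscent of the leader-election gap, not a simulation of the model):

```python
def classical_and_node_averaged(rounds_per_node):
    """Classical time-complexity: rounds until the slowest node outputs.
    Node-averaged time-complexity: mean, over all nodes, of the round at
    which each node outputs."""
    classical = max(rounds_per_node)
    averaged = sum(rounds_per_node) / len(rounds_per_node)
    return classical, averaged

# Hypothetical instance: in an n-node run, all but one node output after
# 1 round while a single node needs n rounds. The classical measure is n,
# the node-averaged measure stays close to 2.
n = 1024
rounds = [1] * (n - 1) + [n]
print(classical_and_node_averaged(rounds))
```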

    A Time-Space Tradeoff for Triangulations of Points in the Plane

    In this paper, we consider time-space trade-offs for reporting a triangulation of points in the plane. The goal is to minimize the amount of working space while keeping the total running time small. We present the first multi-pass algorithm for this problem that returns the edges of a triangulation with their adjacency information. This even improves the previously best known random-access algorithm.

    Universal Reconfiguration of Facet-Connected Modular Robots by Pivots: The O(1) Musketeers

    We present the first universal reconfiguration algorithm for transforming a modular robot between any two facet-connected square-grid configurations using pivot moves. More precisely, we show that five extra "helper" modules ("musketeers") suffice to reconfigure the remaining n modules between any two given configurations. Our algorithm uses O(n^2) pivot moves, which is worst-case optimal. Previous reconfiguration algorithms either require less restrictive "sliding" moves, do not preserve facet-connectivity, or, for the setting we consider, could only handle a small subset of configurations defined by a local forbidden pattern. Configurations with the forbidden pattern do have disconnected reconfiguration graphs (discrete configuration spaces), and indeed we show that they can have an exponential number of connected components. But forbidding the local pattern throughout the configuration is far from necessary, as we show that just a constant number of added modules (placed to be freely reconfigurable) suffices for universal reconfigurability. We also classify three different models of natural pivot moves that preserve facet-connectivity, and show separations between these models.

    Influenza epidemiology, vaccine coverage and vaccine effectiveness in sentinel Australian hospitals in 2013: the Influenza Complications Alert Network

    The National Influenza Program aims to reduce serious morbidity and mortality from influenza by providing public funding for vaccination to at-risk groups. The Influenza Complications Alert Network (FluCAN) is a sentinel hospital-based surveillance program that operates at 14 sites in all states and territories in Australia. This report summarises the epidemiology of hospitalisations with confirmed influenza, estimates vaccine coverage, and estimates influenza vaccine protection against hospitalisation with influenza during the 2013 influenza season. In this observational study, cases were defined as patients admitted to one of the sentinel hospitals with influenza confirmed by nucleic acid testing. Controls were patients with acute respiratory illnesses who tested negative for influenza. Vaccine effectiveness was estimated as 1 minus the odds ratio of vaccination in case patients compared with control patients, after adjusting for known confounders. During the period 5 April to 31 October 2013, 631 patients were admitted with confirmed influenza at the 14 FluCAN sentinel hospitals. Of these, 31% were more than 65 years of age, 9.5% were Indigenous Australians, 4.3% were pregnant, and 77% had chronic co-morbidities. Influenza B was detected in 30% of patients. Vaccination coverage was estimated at 81% in patients more than 65 years of age but only 49% in patients aged less than 65 years with chronic comorbidities. Vaccine effectiveness against hospitalisation with influenza was estimated at 50% (95% confidence interval: 33%, 63%; P<0.001). We detected a significant number of hospital admissions with confirmed influenza in a national observational study. Vaccine coverage was incomplete in at-risk groups, particularly non-elderly patients with medical comorbidities. Our results suggest that the seasonal influenza vaccine was moderately protective against hospitalisation with influenza in the 2013 season.
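The estimator described above (VE = 1 minus the odds ratio of vaccination in cases vs controls) can be sketched in a few lines. This is an unadjusted version with hypothetical counts; the study itself adjusts for confounders via regression:

```python
def vaccine_effectiveness(vacc_cases, unvacc_cases, vacc_controls, unvacc_controls):
    """Test-negative design: VE = 1 - OR, where OR is the odds ratio of
    vaccination among influenza-positive cases relative to test-negative
    controls. Unadjusted sketch; no confounder adjustment."""
    odds_ratio = (vacc_cases / unvacc_cases) / (vacc_controls / unvacc_controls)
    return 1 - odds_ratio

# Hypothetical counts: 100 vaccinated / 200 unvaccinated cases;
# 150 vaccinated / 150 unvaccinated controls.
# OR = (100/200) / (150/150) = 0.5, so VE = 0.5, i.e. 50%.
print(vaccine_effectiveness(100, 200, 150, 150))  # -> 0.5
```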